Allow multiprocessshared to spawn process and delete directly with obj #37112
base: master
Conversation
R: @damccorm
    assert self._SingletonProxy_valid
    self._SingletonProxy_valid = False

  def unsafe_hard_delete(self):
Could you help me understand why we need the unsafe_hard_delete changes? It's not really clear to me what behavior this enables that we can't already do.
It's mainly because models are passed around directly as a _SingletonProxy instead of a _SingletonEntry, so we need a way to call delete directly on the _SingletonProxy.
Ok - let's at least give it a name like singletonProxy_unsafe_hard_delete. Otherwise we will run into issues if someone has an object with a function or property called unsafe_hard_delete, which seems like it could happen.
Sg! Updated.
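For context, here is a rough sketch of how the renamed hook might sit on the proxy wrapper; the attribute names and forwarding logic are illustrative assumptions, not the actual Beam implementation:

class _SingletonProxy:
  """Illustrative stand-in for the proxy wrapper discussed above."""
  def __init__(self, entry):
    self._SingletonProxy_entry = entry
    self._SingletonProxy_valid = True

  def singletonProxy_unsafe_hard_delete(self):
    # The prefixed name keeps this from shadowing an attribute called
    # unsafe_hard_delete on the wrapped object.
    assert self._SingletonProxy_valid
    self._SingletonProxy_entry.unsafe_hard_delete()
    self._SingletonProxy_valid = False

  def __getattr__(self, name):
    # All other attribute lookups are forwarded to the shared object.
    return getattr(self._SingletonProxy_entry.obj, name)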
    self.__dict__.update(state)

  def __getstate__(self):
    return self.__dict__
I assume this is so that the proxy is pickleable, but is it valid? Normally I'd expect this not to be pickleable, since the proxy objects aren't necessarily valid in another context.
Yeah, this is exactly what was needed for the pickling. It does seem to be valid in testing with the custom-built Beam version loaded on a custom container.
I think it would only be valid if you unpickle onto the same machine (and maybe even in the same process). Could you remind me what unpickling issues you ran into?
Just tried removing these and ran the test locally; it's this infinite recursion that happens if I have a proxy on a proxy:
<string>:2: in make_proxy
???
../../../../.pyenv/versions/3.11.14/lib/python3.11/multiprocessing/managers.py:822: in _callmethod
kind, result = conn.recv()
^^^^^^^^^^^
../../../../.pyenv/versions/3.11.14/lib/python3.11/multiprocessing/connection.py:251: in recv
return _ForkingPickler.loads(buf.getbuffer())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
return getattr(self._proxyObject, name)
^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
return getattr(self._proxyObject, name)
^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
return getattr(self._proxyObject, name)
^^^^^^^^^^^^^^^^^
E RecursionError: maximum recursion depth exceeded
!!! Recursion detected (same locals & position)
By proxy on proxy I meant that first a MultiProcessShared object is created, and the instance initialized inside it also tries to create MultiProcessShared objects. For example, like this test:
class SimpleClass:
  def make_proxy(
      self, tag: str = 'proxy_on_proxy', spawn_process: bool = False):
    return multi_process_shared.MultiProcessShared(
        Counter, tag=tag, always_proxy=True,
        spawn_process=spawn_process).acquire()

def test_proxy_on_proxy(self):
  shared1 = multi_process_shared.MultiProcessShared(
      SimpleClass, tag='proxy_on_proxy_main', always_proxy=True)
  instance = shared1.acquire()
  proxy_instance = instance.make_proxy()
  self.assertEqual(proxy_instance.increment(), 1)
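For background on the traceback above: during unpickling, pickle probes the instance for methods before __dict__ has been restored, so a forwarding __getattr__ ends up recursing on itself. A minimal standalone illustration of the pitfall and of why explicit __getstate__/__setstate__ avoid it (this is not Beam code):

import pickle


class Forwarder:
  def __init__(self, target):
    self._target = target

  def __getattr__(self, name):
    # Without __getstate__/__setstate__, unpickling probes for methods
    # like __setstate__ before _target exists in __dict__, so this line
    # re-enters __getattr__ and recurses until the limit is hit.
    return getattr(self._target, name)

  # Defining these explicitly (as the PR does) means pickle finds them
  # on the class, never falls through to __getattr__, and restores
  # __dict__ directly.
  def __getstate__(self):
    return self.__dict__

  def __setstate__(self, state):
    self.__dict__.update(state)


restored = pickle.loads(pickle.dumps(Forwarder(target=[1, 2, 3])))
print(restored.count(2))  # attribute lookup is forwarded to the list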
| """Checks if parent is alive every second.""" | ||
| while True: | ||
| try: | ||
| os.kill(parent_pid, 0) |
Why are we sending a kill signal to the parent process? Isn't this the opposite of what we want?
This doesn't actually send a kill signal; it just uses that interface to perform a check. It fails with OSError if parent_pid is dead, and if the parent is alive nothing happens.
"if the parent is alive nothing happens."
Could you help me understand why this happens? https://www.geeksforgeeks.org/python/python-os-kill-method/ seems to say this will actually send the kill signal. Does the parent just ignore it?
This might be a good description: https://man7.org/linux/man-pages/man2/kill.2.html
In short, if signal 0 is passed in, it doesn't actually send a kill signal and instead just does existence and permission checks.
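A rough sketch of the liveness-check pattern being described; the function name, poll interval, and exit behavior are assumptions rather than the exact PR code:

import os
import sys
import time


def _watch_parent(parent_pid, interval=1.0):
  """Exits this process once the parent can no longer be signalled."""
  while True:
    try:
      # Signal 0 performs only existence and permission checks; no
      # signal is actually delivered to the parent (see kill(2)).
      os.kill(parent_pid, 0)
    except OSError:
      # The parent is gone (or unreachable), so shut this server
      # process down rather than leak it.
      sys.exit(0)
    time.sleep(interval)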
/gemini review
Code Review
This pull request introduces a significant enhancement to MultiProcessShared by allowing it to spawn a dedicated server process and providing a mechanism for forceful deletion. The implementation is robust, incorporating features like a "suicide pact" for server process lifecycle management and detailed error reporting from the child to the parent process. The accompanying tests are thorough, covering various edge cases. I have a few suggestions to further improve the code, mainly around removing a redundant line of code, enhancing logging in exception handlers, and fixing a minor bug in the test setup.
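As a usage-level illustration of what the review describes, tying the pieces in this thread together (the constructor arguments follow the test snippet earlier in the conversation and should be treated as assumptions about the final API, not its definitive shape):

from apache_beam.utils import multi_process_shared


class Counter:
  def __init__(self):
    self._value = 0

  def increment(self):
    self._value += 1
    return self._value


# Spawn a dedicated server process for the shared object...
shared = multi_process_shared.MultiProcessShared(
    Counter, tag='example', always_proxy=True, spawn_process=True)
counter = shared.acquire()
counter.increment()

# ...and tear it down forcefully through the proxy-level hard delete
# discussed above.
counter.singletonProxy_unsafe_hard_delete()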